Neural Networks Trained to Solve Differential Equations Learn General Representations

Magill, Martin; Qureshi, Faisal; de Haan, Hendrick

Neural Information Processing Systems

We introduce a technique based on the singular vector canonical correlation analysis (SVCCA) for measuring the generality of neural network layers across a continuously-parametrized set of tasks. We illustrate this method by studying generality in neural networks trained to solve parametrized boundary value problems based on the Poisson partial differential equation. We find that the first hidden layers are general, and that they learn generalized coordinates over the input domain. Deeper layers are successively more specific. We also validate our method against transfer learning experiments: we find excellent agreement between the two methods, and note that our method is much faster, particularly for continuously-parametrized problems.
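As a minimal sketch of the SVCCA computation the abstract refers to (following Raghu et al., 2017): given the activations of two layers on a shared set of input points, reduce each with an SVD and then compute the canonical correlations between the reduced subspaces. The function name, the NumPy implementation, and the 99% variance threshold are illustrative assumptions, not the authors' code.

```python
import numpy as np

def svcca_similarity(X, Y, var_kept=0.99):
    """Mean SVCCA correlation between two activation matrices.

    X, Y: arrays of shape (n_points, n_neurons), the outputs of two
    layers evaluated on the same set of input points.
    """
    def svd_reduce(A):
        A = A - A.mean(axis=0)  # center each neuron's activations
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        # keep enough singular directions to explain `var_kept` of the variance
        k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), var_kept)) + 1
        return U[:, :k] * s[:k]

    Xr, Yr = svd_reduce(X), svd_reduce(Y)

    # CCA step: the canonical correlations are the singular values of
    # Qx^T Qy, where Qx, Qy are orthonormal bases of the column spaces.
    Qx = np.linalg.svd(Xr, full_matrices=False)[0]
    Qy = np.linalg.svd(Yr, full_matrices=False)[0]
    rho = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return rho.mean()
```

Applying this to corresponding layers of networks trained at different values of the task parameter, with activations collected on a shared grid of domain points, would give the kind of per-layer generality measure the abstract describes.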


Reviews: Neural Networks Trained to Solve Differential Equations Learn General Representations

Neural Information Processing Systems

The authors introduce a technique to measure the generality of hidden layers in neural networks, i.e. the extent to which they transfer to another task. They do this by analyzing the singular vector canonical correlation (SVCCA), which makes use of the outputs of the hidden layers evaluated at different points in the network's input domain. The authors apply their technique to neural networks that are trained to solve differential equations. The results show that the first two layers of a network generalize, that the third layer generalizes only depending on the width of the network, and that the last layer does not generalize. The paper is a novel application of SVCCA (introduced at NIPS '17 by Raghu et al.) to the problem of measuring the generality/transferability of the different layers of a neural network.
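For concreteness, here is a hypothetical, heavily simplified sketch of the kind of network the review refers to: a differential-equation solver trained with a residual loss, in the style of Lagaris-type / physics-informed solvers, reduced to a 1D Poisson problem for brevity. Everything here (PyTorch, the architecture, the optimizer, the forcing family parametrized by k) is an illustrative assumption rather than the paper's setup.

```python
import torch

# Hypothetical 1D Poisson problem: -u''(x) = f(x) on [0, 1], u(0) = u(1) = 0,
# with f drawn from a parametrized family (here f(x) = sin(pi * k * x)).
k = 2.0
def f(x):
    return torch.sin(torch.pi * k * x)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(256, 1, requires_grad=True)  # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    pde_loss = ((-d2u - f(x)) ** 2).mean()      # Poisson residual
    bc_loss = (net(torch.tensor([[0.0], [1.0]])) ** 2).mean()  # Dirichlet penalty
    loss = pde_loss + bc_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Training copies of such a network at several values of the task parameter k, then comparing corresponding hidden layers with the svcca_similarity sketch above, mirrors the per-layer generality analysis described in the review.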
